Deep Probabilistic Graphical Modeling
Probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generative process of data and expressing uncertainty about unknowns. This makes PGM very useful for understanding phenomena underlying data and for decision making. PGM has seen great success in domains where interpretable inferences are key, e.g., marketing, medicine, neuroscience, and social science. However, PGM tends to lack flexibility, which has hindered its use for modeling large-scale, high-dimensional, complex data and for tasks that require flexibility (e.g., in vision and language applications).
Deep learning (DL) is another framework for modeling and learning from data that has seen great empirical success in recent years. DL is very powerful and offers great flexibility, but it lacks the interpretability and calibration of PGM.
This thesis develops deep probabilistic graphical modeling (DPGM). DPGM leverages DL to make PGM more flexible, yielding new methods for learning from data that exhibit the advantages of both PGM and DL.
We use DL within PGM to build flexible models endowed with an interpretable latent structure. One family of models we develop extends exponential family principal component analysis (EF-PCA) using neural networks to improve predictive performance while enforcing the interpretability of the latent factors. Another model class we introduce enables accounting for long-term dependencies when modeling sequential data, which is a challenge when using purely DL or PGM approaches. This model class for sequential data was successfully applied to language modeling, unsupervised document representation learning for sentiment analysis, conversation modeling, and patient representation learning for hospital readmission prediction. Finally, DPGM successfully solves several outstanding problems of probabilistic topic models.
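The EF-PCA extension described above replaces the linear map from latent factors to natural parameters with a neural network. The snippet below is a minimal, hypothetical sketch of that idea for Bernoulli observations; the network sizes, weights, and variable names are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(z, W1, W2):
    # Neural network mapping latent factors z to natural parameters
    # (a linear map here would recover classical EF-PCA)
    h = np.tanh(z @ W1)   # hidden layer
    return h @ W2         # logits = natural parameters of a Bernoulli

def bernoulli_log_lik(x, logits):
    # Exponential-family (Bernoulli) log-likelihood in natural-parameter form:
    # log p(x | eta) = x * eta - log(1 + exp(eta)), summed over dimensions
    return float(np.sum(x * logits - np.logaddexp(0.0, logits)))

# Toy dimensions: 2 interpretable latent factors, 5-dim binary observation
z = rng.normal(size=(1, 2))
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 5))
logits = decoder(z, W1, W2)
x = (rng.random((1, 5)) < 1.0 / (1.0 + np.exp(-logits))).astype(float)
ll = bernoulli_log_lik(x, logits)
```

Fitting such a model (e.g., by variational inference over z) retains the interpretable latent structure of EF-PCA while the neural decoder adds predictive flexibility.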
Leveraging DL within PGM also yields new algorithms for learning from complex data. For example, we develop entropy-regularized adversarial learning, a learning paradigm that deviates from the traditional maximum-likelihood approach used in PGM. From the DL perspective, entropy-regularized adversarial learning provides a solution to the long-standing mode collapse problem of generative adversarial networks.
Cousins Of The Vendi Score: A Family Of Similarity-Based Diversity Metrics For Science And Machine Learning
Measuring diversity accurately is important for many scientific fields,
including machine learning (ML), ecology, and chemistry. The Vendi Score was
introduced as a generic similarity-based diversity metric that extends the Hill
number of order q=1 by leveraging ideas from quantum statistical mechanics.
Contrary to many diversity metrics in ecology, the Vendi Score accounts for
similarity and does not require knowledge of the prevalence of the categories
in the collection to be evaluated for diversity. However, the Vendi Score
treats each item in a given collection with a level of sensitivity proportional
to the item's prevalence. This is undesirable in settings where there is a
significant imbalance in item prevalence. In this paper, we extend the other
Hill numbers using similarity to provide flexibility in allocating sensitivity
to rare or common items. This leads to a family of diversity metrics -- Vendi
scores with different levels of sensitivity -- that can be used in a variety of
applications. We study the properties of the scores in a synthetic controlled
setting where the ground truth diversity is known. We then test their utility
in improving molecular simulations via Vendi Sampling. Finally, we use the
Vendi scores to better understand the behavior of image generative models in
terms of memorization, duplication, diversity, and sample quality.
Comment: Code for evaluating diversity using the Vendi scores can be found at https://github.com/vertaix/Vendi-Score. Code for using the scores within Vendi Sampling can be found at https://github.com/vertaix/Vendi-Samplin
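As a rough sketch of the underlying idea (not the authors' reference implementation; see the linked repositories for that), a Vendi Score of order q can be computed from the eigenvalues of the normalized similarity matrix: at q = 1 it is the exponential of the Shannon entropy of the eigenvalues, and other orders use the corresponding Hill-number form.

```python
import numpy as np

def vendi_score(K, q=1.0):
    # K: n x n positive semi-definite similarity matrix with K[i, i] = 1
    n = K.shape[0]
    lam = np.linalg.eigvalsh(K / n)   # eigenvalues sum to 1, like probabilities
    lam = lam[lam > 1e-12]            # drop numerical zeros
    if q == 1.0:
        # exponential of the Shannon entropy of the eigenvalues
        return float(np.exp(-np.sum(lam * np.log(lam))))
    # Hill-number form of order q applied to the eigenvalues
    return float(np.sum(lam ** q) ** (1.0 / (1.0 - q)))
```

With K = I (all items maximally dissimilar) the score equals the number of items for every q; with K all ones (identical items) it equals 1. Varying q shifts sensitivity toward rare (q < 1) or common (q > 1) items, which is the flexibility the paper introduces.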
LLM-Prop: Predicting Physical And Electronic Properties Of Crystalline Solids From Their Text Descriptions
The prediction of crystal properties plays a crucial role in the crystal
design process. Current methods for predicting crystal properties focus on
modeling crystal structures using graph neural networks (GNNs). Although GNNs
are powerful, accurately modeling the complex interactions between atoms and
molecules within a crystal remains a challenge. Surprisingly, predicting
crystal properties from crystal text descriptions is understudied, despite the
rich information and expressiveness that text data offer. One of the main
reasons is the lack of publicly available data for this task. In this paper, we
develop and make public a benchmark dataset (called TextEdge) that contains
text descriptions of crystal structures with their properties. We then propose
LLM-Prop, a method that leverages the general-purpose learning capabilities of
large language models (LLMs) to predict the physical and electronic properties
of crystals from their text descriptions. LLM-Prop outperforms the current
state-of-the-art GNN-based crystal property predictor by about 4% in predicting
band gap, 3% in classifying whether the band gap is direct or indirect, and 66%
in predicting unit cell volume. LLM-Prop also outperforms a finetuned MatBERT,
a domain-specific pre-trained BERT model, despite having 3 times fewer
parameters. Our empirical results may highlight the current inability of GNNs
to capture information pertaining to space group symmetry and Wyckoff sites for
accurate crystal property prediction.
Comment: Code for LLM-Prop can be found at: https://github.com/vertaix/LLM-Pro
Readmission prediction via deep contextual embedding of clinical concepts
Objective
Hospital readmissions incur substantial costs every year. Many readmissions are avoidable, and excessive readmissions can also harm patients. Accurate prediction of hospital readmission can effectively help reduce readmission risk. However, the complex relationship between readmission and potential risk factors makes readmission prediction a difficult task. The main goal of this paper is to explore deep learning models that distill such complex relationships and make accurate predictions.
Materials and methods
We propose CONTENT, a deep model that predicts hospital readmissions by learning interpretable patient representations, capturing both local and global contexts from patient Electronic Health Records (EHRs) through a hybrid Topic Recurrent Neural Network (TopicRNN) model. Experiments were conducted on the EHRs of a real-world Congestive Heart Failure (CHF) cohort of 5,393 patients.
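Schematically, capturing both contexts amounts to scoring a patient with a local RNN summary of the visit sequence alongside global topic proportions inferred from the whole record. The snippet below is a hypothetical, highly simplified sketch of that fusion step; the variable names and dimensions are invented for illustration, and the actual CONTENT model is considerably more involved.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def readmission_prob(rnn_state, topic_proportions, w_local, w_global, b):
    # Local context: an RNN hidden state summarizing the recent visit sequence.
    # Global context: topic proportions summarizing the full patient record.
    score = rnn_state @ w_local + topic_proportions @ w_global + b
    return float(sigmoid(score))

rng = np.random.default_rng(0)
h = rng.normal(size=4)             # stand-in for an RNN hidden state
theta = np.array([0.7, 0.2, 0.1])  # stand-in for inferred topic proportions
p = readmission_prob(h, theta, rng.normal(size=4), rng.normal(size=3), 0.0)
```

The topic proportions are what make the learned representation interpretable: they can be read off directly for patient phenotyping, as the Results section describes.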
Results
The proposed model outperforms state-of-the-art methods in readmission prediction (e.g., ROC-AUC of 0.6103 ± 0.0130 vs. 0.5998 ± 0.0124 for the second-best method). The derived patient representations were further utilized for patient phenotyping. The learned phenotypes provide a more precise understanding of readmission risks.
Discussion
Embedding both local and global context in patient representations not only improves prediction performance but also yields interpretable insights into readmission risks for heterogeneous chronic clinical conditions.
Conclusion
This is the first model of its kind to integrate the power of conventional deep neural networks and probabilistic generative models for highly interpretable deep patient representation learning. Experimental results and case studies demonstrate the improved performance and interpretability of the model.